Bayesian information criterion

In statistics, the Bayesian information criterion (BIC) or Schwarz criterion (also SBC, SBIC) is a criterion for model selection among a finite set of models; the model with the lowest BIC is preferred. It is based, in part, on the likelihood function and is closely related to the Akaike information criterion (AIC).
When fitting models, it is possible to increase the likelihood by adding parameters, but doing so may result in overfitting. Both BIC and AIC resolve this problem by introducing a penalty term for the number of parameters in the model; the penalty term is larger in BIC than in AIC (for n greater than 7, since the per-parameter penalty is ln(n) for BIC versus 2 for AIC).
The BIC was developed by Gideon E. Schwarz and published in a 1978 paper, where he gave a Bayesian argument for adopting it.
== Definition ==
The BIC is formally defined as
: \mathrm{BIC} = -2 \ln \hat L + k \ln(n) . \,
where
*x = the observed data;
*\theta = the parameters of the model;
*n = the number of data points in x, the number of observations, or equivalently, the sample size;
*k = the number of free parameters to be estimated. If the model under consideration is a linear regression, k is the number of regressors, including the intercept;
*\hat L = the maximized value of the likelihood function of the model M, i.e. \hat L=p(x|\hat\theta,M), where \hat\theta are the parameter values that maximize the likelihood function.
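The definition above can be made concrete with a small sketch (not from the article): a hypothetical `bic` helper applying BIC = k ln(n) − 2 ln L̂, illustrated on an ordinary least-squares fit, assuming Gaussian errors so that the maximized log-likelihood has the closed form −(n/2)(ln 2π + ln(RSS/n) + 1). The variable names and the choice to count the error variance as a free parameter (k = 3) are illustrative assumptions.

```python
import numpy as np

def bic(log_likelihood, k, n):
    # BIC = k*ln(n) - 2*ln(L_hat), per the definition above.
    return k * np.log(n) - 2.0 * log_likelihood

# Illustrative data: y = 1 + 2x + Gaussian noise (hypothetical example).
rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0.0, 10.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)

# Least-squares fit of y = b0 + b1*x.
X = np.column_stack([np.ones(n), x])
beta, residuals, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = float(residuals[0])

# Maximized Gaussian log-likelihood, with sigma^2 estimated as RSS/n.
log_lik = -0.5 * n * (np.log(2.0 * np.pi) + np.log(rss / n) + 1.0)

# k = 3: intercept, slope, and the estimated error variance
# (a modeling assumption; some treatments count only the regressors).
print(bic(log_lik, k=3, n=n))
```

The helper works with any maximized log-likelihood, not just the Gaussian one; only `log_lik` and the parameter count change per model.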
The BIC is an asymptotic result derived under the assumptions that the data distribution is in the exponential family.
That is, for fixed observed data x, the integral of the likelihood function p(x|\theta,M) times the prior probability distribution p(\theta|M) over the parameters \theta of the model M,
: p(x|M) = \int p(x|\theta,M)\, p(\theta|M)\, d\theta , \,
is approximated, for large n, by
: \ln p(x|M) \approx \ln \hat L - \frac{k}{2} \ln(n) = -\frac{\mathrm{BIC}}{2} . \,
Selecting the model with the lowest BIC therefore approximately selects the model with the highest marginal likelihood p(x|M).
The BIC is used in model selection problems in which only differences in BIC between candidate models matter; adding the same constant to the BIC of every candidate does not change which model is selected.
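As an illustration of the selection rule and of the constant-shift invariance just noted, the following sketch (assumptions: Gaussian likelihood, polynomial candidate models, and illustrative data where the true relationship is linear) compares BIC across models and checks that shifting every score by the same constant leaves the chosen model unchanged:

```python
import numpy as np

def bic(log_lik, k, n):
    # BIC = k*ln(n) - 2*ln(L_hat).
    return k * np.log(n) - 2.0 * log_lik

def gaussian_log_lik(y, y_hat):
    # Maximized Gaussian log-likelihood with sigma^2 set to RSS/n.
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return -0.5 * n * (np.log(2.0 * np.pi) + np.log(rss / n) + 1.0)

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(-3.0, 3.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)  # true model is linear

# Candidate models: polynomials of degree 1, 2, 3.
scores = {}
for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    k = degree + 2  # polynomial coefficients + intercept + error variance
    scores[degree] = bic(gaussian_log_lik(y, y_hat), k, n)

best = min(scores, key=scores.get)

# Adding the same constant to every BIC does not change the winner.
shifted = {d: s + 123.4 for d, s in scores.items()}
assert min(shifted, key=shifted.get) == best
print(best, scores)
```

Only the differences between the scores drive the choice, which is why any additive constant common to all candidates is irrelevant.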

Excerpt source: the free encyclopedia Wikipedia, article "Bayesian information criterion".



Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.